Can Artificial Intelligence Accelerate Technological Progress? Researchers' Perspectives on AI in Manufacturing and Materials Science
Nelson, John P., Olugbade, Olajide, Shapira, Philip, Biddle, Justin B.
Applications of artificial intelligence or machine learning in research
- Modes of use
  - Surrogate modeling for physics-based models
  - Modeling of poorly understood phenomena
  - Data preprocessing
  - Large language model use
- Applications
  - AI/ML as research tool
    - Production process design, monitoring, & output prediction
    - Part design & properties prediction
    - Materials design & properties prediction
  - AI/ML as research product
    - Generative AI design tool for consumers
  - Generic research tasks
    - Large language models for coding
    - Large language models for literature review

Benefits of artificial intelligence or machine learning in research
- Reduction in accuracy/cost/speed trade-off in research, especially computer modeling
  - Reduced computation time
  - Replacing experimentation
  - Reducing need for computationally intensive, physics-based models
  - Saving research labor
  - Exploring larger design spaces
- Address of previously unsolvable problems
  - Model poorly understood relationships between variables
  - Identify human-unidentifiable patterns or phenomena

Downsides of artificial intelligence or machine learning in research
- Accuracy weaknesses
  - Predict poorly outside regions of dense, high-quality training data
- Interpretability weaknesses
  - Bounds of accuracy can be unclear
  - Accuracy assessment can be difficult
- Long-run scientific progress concerns
  - AI/ML cannot develop novel scientific theory
  - AI/ML may bypass opportunities to identify empirical or theoretical novelties
- Resource issues
  - Data acquisition and cleaning is time-intensive
  - AI/ML models are computation- and energy-intensive to develop
- Inappropriate use issues
  - Easy to over-trust
  - May be inappropriately used to address problems soluble with simpler methods

Second, AI/ML models can be trained on input and output data for phenomena (e.g., complex production processes) which lack robust theoretical models, developing novel predictive capabilities in the absence of explicit, human-designed theory.
This is sometimes referred to as "phenomenological modeling," as it attempts to model phenomena in the absence of mechanistic, explanatory understanding: [T]he first reason we choose to use AI is because we don't have a good model of what our system is... I get a bunch of data coming in and I have a bunch of sensor readings, you know... And I use the AI to map the bunch of sensor readings to the process health or process status or machine status that I have.
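The workflow the interviewee describes — mapping raw sensor readings to a machine status purely from historical data, with no mechanistic model — can be sketched as a data-driven classifier. A minimal illustration; the readings, status labels, and the k-nearest-neighbor choice are hypothetical, not from the interviewees' systems:

```python
# Illustrative "phenomenological" model: learn sensor-reading -> machine-status
# from historical (reading, status) pairs, with no theory of the process.
# All data values and the k-NN method are hypothetical placeholders.
import math

def predict_status(reading, training_data, k=3):
    """Classify a sensor vector by majority vote of its k nearest neighbors."""
    scored = sorted(training_data, key=lambda ex: math.dist(ex[0], reading))
    votes = [status for _, status in scored[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical history: (temperature, vibration, defect_rate) -> status
history = [
    ((70.0, 1.1, 0.02), "healthy"),
    ((71.5, 1.0, 0.03), "healthy"),
    ((95.0, 2.8, 0.40), "degraded"),
    ((97.2, 3.1, 0.55), "degraded"),
]
print(predict_status((72.0, 1.05, 0.025), history))  # nearest neighbors are "healthy"
```

The point of the sketch is that the mapping is induced entirely from data: nothing in the code encodes why a hot, vibrating machine degrades, only that similar readings had similar outcomes.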
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Virginia (0.04)
- (13 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.68)
- Research Report > Experimental Study (0.68)
On AI Verification in Open RAN
Soundrarajan, Rahul, Fiandrino, Claudio, Polese, Michele, D'Oro, Salvatore, Bonati, Leonardo, Melodia, Tommaso
Open RAN introduces a flexible, cloud-based architecture for the Radio Access Network (RAN), enabling Artificial Intelligence (AI)/Machine Learning (ML)-driven automation across heterogeneous, multi-vendor deployments. While eXplainable Artificial Intelligence (XAI) helps mitigate the opacity of AI models, explainability alone does not guarantee reliable network operations. In this article, we propose a lightweight verification approach based on interpretable models to validate the behavior of Deep Reinforcement Learning (DRL) agents for RAN slicing and scheduling in Open RAN. Specifically, we use Decision Tree (DT)-based verifiers to perform near-real-time consistency checks at runtime, which would otherwise be unfeasible with computationally expensive state-of-the-art verifiers. We analyze the landscape of XAI and AI verification, propose a scalable architectural integration, and demonstrate feasibility with a DT-based slice-verifier. We also outline future challenges to ensure trustworthy AI adoption in Open RAN.
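The consistency check described here — comparing a DRL agent's action against an interpretable decision-tree surrogate at runtime — can be sketched as follows. The policy logic, load/latency thresholds, and PRB counts below are invented placeholders, not the article's models:

```python
# Sketch of a DT-based runtime verifier for a slicing agent.
# Both "models" below are hand-written stand-ins, not trained artifacts.

def drl_policy(load, latency_ms):
    # Stand-in for the black-box DRL action: PRBs allocated to the slice.
    # The 0.55 boundary mimics a learned policy that has drifted slightly.
    return 10 if load > 0.55 or latency_ms > 8.0 else 4

def dt_verifier(load, latency_ms):
    # Interpretable decision-tree surrogate distilled offline (hand-written here).
    if load > 0.6 or latency_ms > 8.0:
        return 10
    return 4

def consistent(load, latency_ms):
    """Near-real-time check: does the black-box action match the surrogate?"""
    return drl_policy(load, latency_ms) == dt_verifier(load, latency_ms)

print(consistent(0.7, 5.0))   # both allocate 10 PRBs: consistent
print(consistent(0.58, 5.0))  # mismatch near the boundary: flag for inspection
```

Because the tree is cheap to evaluate, the check can run at near-real-time timescales where formal neural-network verifiers would be too slow; disagreements mark decisions worth auditing rather than proving the agent wrong.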
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Spain > Community of Madrid > Madrid (0.04)
- Asia > India > Karnataka > Bengaluru (0.04)
- Africa > Mozambique > Gaza Province > Xai-Xai (0.04)
- Telecommunications (0.89)
- Information Technology (0.88)
AI/ML Life Cycle Management for Interoperable AI Native RAN
Huang, Chu-Hsiang, Wen, Chao-Kai, Li, Geoffrey Ye
Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN), powering beam management, channel state information (CSI) feedback, positioning, and mobility prediction. However, without a standardized life-cycle management (LCM) framework, challenges such as model drift, vendor lock-in, and limited transparency hinder large-scale adoption. Beginning with the Network Data Analytics Function (NWDAF) in Rel-16, subsequent releases introduced standardized interfaces for model transfer, execution, performance monitoring, and closed-loop control, culminating in Rel-20's two-sided CSI-compression Work Item and vendor-agnostic LCM profile. This article reviews the resulting five-block LCM architecture, KPI-driven monitoring mechanisms, and inter-vendor collaboration schemes, while identifying open challenges in resource-efficient monitoring, environment drift detection, intelligent decision-making, and flexible model training. These developments lay the foundation for AI-native transceivers as a key enabler for 6G.

C.-H. Huang is with the Department of Electrical Engineering, National Taiwan University, Taipei 10617, Taiwan, Email: chuhsianh@ntu.edu.tw. C.-K. Wen is with the Institute of Communications Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan, Email: chaokai.wen@mail.nsysu.edu.tw. G. Y. Li is with the Department of Electrical and Electronic Engineering, Imperial College London, SW7 2AZ London, U.K., Email: geoffrey.li@imperial.ac.uk. This work has been submitted to the IEEE for possible publication.

Artificial intelligence (AI) and machine learning (ML) have demonstrated significant potential in enhancing radio access network (RAN) performance, particularly for nonlinear and analytically complex tasks, such as beam management [1], channel state information (CSI) feedback [2], [3], positioning [4], and mobility prediction [5].
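The KPI-driven monitoring the article reviews can be sketched as a windowed check that flags model drift and triggers retraining in the LCM loop. The window size, threshold, and KPI values below are hypothetical, not standardized numbers:

```python
# Hedged sketch of KPI-driven LCM monitoring: track a deployed model's KPI
# (e.g., CSI reconstruction accuracy) over a sliding window and request
# retraining when the windowed mean degrades. All parameters are hypothetical.
from collections import deque

class KpiMonitor:
    def __init__(self, window=5, threshold=0.9):
        self.window = deque(maxlen=window)  # keeps only the last `window` KPIs
        self.threshold = threshold

    def report(self, kpi):
        """Return 'retrain' when the windowed KPI average falls below threshold."""
        self.window.append(kpi)
        mean = sum(self.window) / len(self.window)
        return "retrain" if mean < self.threshold else "ok"

mon = KpiMonitor()
for kpi in [0.95, 0.94, 0.93, 0.80, 0.75]:  # environment drifts over time
    action = mon.report(kpi)
print(action)  # windowed mean has dropped below 0.9
```

A closed-loop controller would route the "retrain" signal back to the model-training block, which is the kind of standardized interface the LCM framework defines.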
- Europe > United Kingdom > England > Greater London > London (0.24)
- Asia > Taiwan > Takao Province > Kaohsiung (0.24)
- Asia > Taiwan > Taiwan Province > Taipei (0.24)
- Research Report (1.00)
- Overview (0.88)
Towards AI-Native RAN: An Operator's Perspective of 6G Day 1 Standardization
Li, Nan, Sun, Qi, Wang, Lehan, Xu, Xiaofei, Huang, Jinri, Liu, Chunhui, Gao, Jing, Huang, Yuhong, I, Chih-Lin
Artificial Intelligence/Machine Learning (AI/ML) has become the most certain and prominent feature of 6G mobile networks. Unlike 5G, where AI/ML was not natively integrated but rather an add-on feature over the existing architecture, 6G shall incorporate AI from the outset to address its complexity and support ubiquitous AI applications. Based on our extensive mobile network operation and standardization experience from 2G to 5G, this paper explores the design and standardization principles of AI-Native radio access networks (RAN) for 6G, with a particular focus on its critical Day 1 architecture, functionalities, and capabilities. We investigate the framework of AI-Native RAN and present its three essential capabilities to shed some light on the standardization direction; namely, AI-driven RAN processing/optimization/automation, reliable AI lifecycle management (LCM), and AI-as-a-Service (AIaaS) provisioning. The standardization of AI-Native RAN, in particular the Day 1 features, including an AI-Native 6G RAN architecture, is proposed. For validation, a large-scale field trial with over 5,000 5G-A base stations has been built, delivering significant improvements in average air interface latency, root cause identification, and network energy consumption with the proposed architecture and the supporting AI functions. This paper aims to provide a Day 1 framework for 6G AI-Native RAN standardization design, balancing technical innovation with practical deployment.
- Energy (1.00)
- Information Technology > Security & Privacy (0.93)
- Telecommunications > Networks (0.68)
- Information Technology > Networks (0.68)
Position Paper: Rethinking AI/ML for Air Interface in Wireless Networks
Kontes, Georgios, Michalopoulos, Diomidis S., Ghimire, Birendra, Mutschler, Christopher
AI/ML research has predominantly been driven by domains such as computer vision, natural language processing, and video analysis. In contrast, the application of AI/ML to wireless networks, particularly at the air interface, remains in its early stages. Although there are emerging efforts to explore this intersection, fully realizing the potential of AI/ML in wireless communications requires a deep interdisciplinary understanding of both fields. We provide an overview of AI/ML-related discussions in 3GPP standardization, highlighting key use cases, architectural considerations, and technical requirements. We outline open research challenges and opportunities where academic and industrial communities can contribute to shaping the future of AI-enabled wireless systems.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Europe > France (0.04)
- Overview (0.54)
- Research Report (0.40)
Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations
Alattal, Dima, Azar, Asal Khoshravan, Myles, Puja, Branson, Richard, Abdulhussein, Hatim, Tucker, Allan
There is a growing demand for the use of Artificial Intelligence (AI) and Machine Learning (ML) in healthcare, particularly as clinical decision support systems to assist medical professionals. However, the complexity of many of these models, often referred to as black box models, raises concerns about their safe integration into clinical settings, as it is difficult to understand how they arrived at their predictions. This paper discusses insights and recommendations derived from an expert working group convened by the UK Medicines and Healthcare products Regulatory Agency (MHRA). The group consisted of healthcare professionals, regulators, and data scientists, with a primary focus on evaluating the outputs from different AI algorithms in clinical decision-making contexts. Additionally, the group evaluated findings from a pilot study investigating clinicians' behaviour and interaction with AI methods during clinical diagnosis. Incorporating explainability methods is crucial for ensuring the safety and trustworthiness of medical AI devices in clinical settings. Adequate training for stakeholders is essential to address potential issues, and further insights and recommendations for safely adopting AI systems in healthcare settings are provided.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Michigan (0.04)
- Europe > United Kingdom > England > West Midlands (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
Role and Use of Race in AI/ML Models Related to Health
Were, Martin C., Li, Ang, Malin, Bradley A., Yin, Zhijun, Coco, Joseph R., Collins, Benjamin X., Clayton, Ellen Wright, Novak, Laurie L., Hendricks-Sturrup, Rachele, Oluyomi, Abiodun, Anders, Shilo, Yan, Chao
The role and use of race within health-related artificial intelligence and machine learning (AI/ML) models has sparked increasing attention and controversy. Despite the complexity and breadth of related issues, a robust and holistic framework to guide stakeholders in their examination and resolution remains lacking. This perspective provides a broad-based, systematic, and cross-cutting landscape analysis of race-related challenges, structured around the AI/ML lifecycle and framed through "points to consider" to support inquiry and decision-making.

INTRODUCTION

The role and use of the social construct of race within health-related artificial intelligence and machine learning (AI/ML) models has become a subject of increased attention and controversy. As noted in the National Academies' recent report "Ending Unequal Treatment", it is increasingly clear that race in all its complexity is a powerful predictor of unequal treatment and health care outcomes.
- North America > United States > District of Columbia > Washington (0.14)
- North America > United States > Tennessee > Davidson County > Nashville (0.05)
- South America > Brazil (0.04)
- (6 more...)
Enabling AutoML for Zero-Touch Network Security: Use-Case Driven Analysis
Yang, Li, Rajab, Mirna El, Shami, Abdallah, Muhaidat, Sami
Zero-Touch Networks (ZTNs) represent a state-of-the-art paradigm shift towards fully automated and intelligent network management, enabling the automation and intelligence required to manage the complexity, scale, and dynamic nature of next-generation (6G) networks. ZTNs leverage Artificial Intelligence (AI) and Machine Learning (ML) to enhance operational efficiency, support intelligent decision-making, and ensure effective resource allocation. However, the implementation of ZTNs is subject to security challenges that need to be resolved to achieve their full potential. In particular, two critical challenges arise: the need for human expertise in developing AI/ML-based security mechanisms, and the threat of adversarial attacks targeting AI/ML models. In this survey paper, we provide a comprehensive review of current security issues in ZTNs, emphasizing the need for advanced AI/ML-based security mechanisms that require minimal human intervention and protect AI/ML models themselves. Furthermore, we explore the potential of Automated ML (AutoML) technologies in developing robust security solutions for ZTNs. Through case studies, we illustrate practical approaches to securing ZTNs against both conventional and AI/ML-specific threats, including the development of autonomous intrusion detection systems and strategies to combat Adversarial ML (AML) attacks. The paper concludes with a discussion of the future research directions for the development of ZTN security approaches.
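The core AutoML idea invoked here — selecting a security model's configuration automatically from validation performance, with minimal human intervention — reduces to a search loop over candidates. A toy sketch with a synthetic packet-rate anomaly detector; all data, labels, and candidate thresholds are invented:

```python
# Toy AutoML sketch: pick a detector hyperparameter (here, a packet-rate
# anomaly threshold) by validation accuracy instead of manual expert tuning.
# The validation set and candidate thresholds are synthetic placeholders.

def detect(packet_rate, threshold):
    """Flag traffic whose packet rate exceeds the threshold."""
    return "attack" if packet_rate > threshold else "benign"

# Hypothetical labeled validation traffic: (packets per second, true label)
val_set = [(120, "benign"), (150, "benign"), (900, "attack"),
           (1100, "attack"), (400, "benign")]
candidates = [100, 300, 500, 800]

def accuracy(threshold):
    return sum(detect(r, threshold) == y for r, y in val_set) / len(val_set)

best = max(candidates, key=accuracy)  # automated model selection step
print(best, accuracy(best))
```

Real AutoML pipelines search far richer spaces (model families, features, architectures), but the zero-touch property is the same: the loop, not a human analyst, picks the configuration.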
- North America > Canada > Ontario > Middlesex County > London (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Spain > Community of Madrid > Madrid (0.04)
- (8 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.93)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.30)
Towards Zero Touch Networks: Cross-Layer Automated Security Solutions for 6G Wireless Networks
Yang, Li, Naser, Shimaa, Shami, Abdallah, Muhaidat, Sami, Ong, Lyndon, Debbah, Mérouane
The transition from 5G to 6G mobile networks necessitates network automation to meet the escalating demands for high data rates, ultra-low latency, and integrated technology. Recently, Zero-Touch Networks (ZTNs), driven by Artificial Intelligence (AI) and Machine Learning (ML), are designed to automate the entire lifecycle of network operations with minimal human intervention, presenting a promising solution for enhancing automation in 5G/6G networks. However, the implementation of ZTNs brings forth the need for autonomous and robust cybersecurity solutions, as ZTNs rely heavily on automation. AI/ML algorithms are widely used to develop cybersecurity mechanisms, but require substantial specialized expertise and encounter model drift issues, posing significant challenges in developing autonomous cybersecurity measures. Therefore, this paper proposes an automated security framework targeting Physical Layer Authentication (PLA) and Cross-Layer Intrusion Detection Systems (CLIDS) to address security concerns at multiple Internet protocol layers. The proposed framework employs drift-adaptive online learning techniques and a novel enhanced Successive Halving (SH)-based Automated ML (AutoML) method to automatically generate optimized ML models for dynamic networking environments. Experimental results illustrate that the proposed framework achieves high performance on the public Radio Frequency (RF) fingerprinting and Canadian Institute for Cybersecurity (CIC) CICIDS2017 datasets, showcasing its effectiveness in addressing PLA and CLIDS tasks within dynamic and complex networking environments. Furthermore, the paper explores open challenges and research directions in the 5G/6G cybersecurity domain. This framework represents a significant advancement towards fully autonomous and secure 6G networks, paving the way for future innovations in network automation and cybersecurity.
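The Successive Halving method this framework builds on can be sketched generically: evaluate many candidate configurations on a small budget, discard the worse half, and double the budget for the survivors. The `score` function below is a synthetic stand-in for training an ML model under a compute budget, not the paper's enhanced SH variant:

```python
# Generic Successive Halving sketch. Candidate configs and the score function
# are synthetic; in the paper's setting, "score" would be validation performance
# of a security model trained under the given compute budget.

def score(config, budget):
    # Hypothetical: a larger budget reveals more of the config's true quality.
    return config["quality"] * min(1.0, budget / 8)

def successive_halving(configs, budget=1):
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: score(c, budget), reverse=True)
        configs = ranked[: len(ranked) // 2]   # keep the top half
        budget *= 2                            # double budget for survivors
    return configs[0]

configs = [{"name": f"c{i}", "quality": q}
           for i, q in enumerate([0.2, 0.9, 0.5, 0.7])]
winner = successive_halving(configs)
print(winner["name"])  # the highest-quality config survives every round
```

The efficiency win is that weak configurations are eliminated cheaply at low budgets, so most compute is spent only on promising candidates.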
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Oceania > Australia > New South Wales (0.14)
- North America > Canada > Ontario > Middlesex County > London (0.14)
- (7 more...)
- Research Report > Promising Solution (1.00)
- Overview (1.00)
- Research Report > New Finding (0.92)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
A Machine Learning based Hybrid Receiver for 5G NR PRACH
Singh, Rohit, Yerrapragada, Anil Kumar, Ganti, Radha Krishna
Random Access is a critical procedure using which a User Equipment (UE) identifies itself to a Base Station (BS). Random Access starts with the UE transmitting a random preamble on the Physical Random Access Channel (PRACH). In a conventional BS receiver, the UE's specific preamble is identified by correlation with all the possible preambles. The PRACH signal is also used to estimate the timing advance which is induced by propagation delay. Correlation-based receivers suffer from false peaks and missed detection in scenarios dominated by high fading and low signal-to-noise ratio. This paper describes the design of a hybrid receiver that consists of an AI/ML model for preamble detection followed by conventional peak detection for the Timing Advance estimation. The proposed receiver combines the Power Delay Profiles of correlation windows across multiple antennas and uses the combination as input to a Neural Network model. The model predicts the presence or absence of a user in a particular preamble window, after which the timing advance is estimated by peak detection. Results show superior performance of the hybrid receiver compared to conventional receivers both for simulated and real hardware-captured datasets.
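The hybrid receiver pipeline described in the abstract — combine per-antenna Power Delay Profiles, detect the preamble, then estimate the timing advance by conventional peak detection — can be sketched conceptually. A trivial threshold stands in for the paper's Neural Network detector, and all signal values below are synthetic:

```python
# Conceptual sketch of the hybrid PRACH receiver flow. The threshold detector
# is a stand-in for the paper's NN model; PDP samples are synthetic.

def combine_pdps(pdps):
    """Non-coherently sum Power Delay Profiles across antennas, per sample."""
    return [sum(samples) for samples in zip(*pdps)]

def detect_user(pdp, threshold=2.0):
    # Stand-in for the NN classifier: is any sample well above the noise floor?
    return max(pdp) > threshold

def timing_advance(pdp):
    """Conventional peak detection: delay index of the strongest path."""
    return pdp.index(max(pdp))

pdps = [
    [0.1, 0.2, 1.5, 0.3],   # antenna 0: correlation-window PDP
    [0.2, 0.1, 1.7, 0.2],   # antenna 1: correlation-window PDP
]
combined = combine_pdps(pdps)
if detect_user(combined):
    print("preamble detected, TA index:", timing_advance(combined))
```

Combining across antennas before detection is what lets the learned stage suppress false peaks in low-SNR, high-fading conditions, while the timing advance still comes from the classical peak-search stage.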